    The Bidder's Curse

    We employ a novel approach to identify overbidding in the field: we compare auction prices to fixed prices for the same item on the same webpage. In detailed board-game data, 42 percent of auctions exceed the simultaneous fixed price. The result replicates in a broad cross-section of auctions (48 percent). A small fraction of overbidders, 17 percent, suffices to generate the observed overbidding. The behavior is inconsistent with rational bidding, even allowing for uncertainty and switching costs, since the expected auction price also exceeds the fixed price. Limited attention to outside options is most consistent with our results.
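
    As a concrete reading of the identification strategy, here is a minimal sketch that computes the share of auctions whose final price exceeds the simultaneous fixed price. The prices and the data layout are invented placeholders, not the paper's data.

```python
# Minimal sketch of the identification idea: flag an auction as an
# overbid when its final price exceeds the fixed price offered for the
# identical item at the same time. Values are illustrative, not real data.
auctions = [
    # (final_auction_price, simultaneous_fixed_price)
    (34.50, 29.95),
    (25.00, 29.95),
    (31.10, 29.95),
]

overbid = [price > fixed for price, fixed in auctions]
share_overbid = sum(overbid) / len(auctions)
print(f"Share of auctions exceeding the fixed price: {share_overbid:.0%}")
```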

    Refining the Spin Hamiltonian in the Spin-1/2 Kagome Lattice Antiferromagnet ZnCu3(OH)6Cl2 using Single Crystals

    We report thermodynamic measurements of the S = 1/2 kagome lattice antiferromagnet ZnCu3(OH)6Cl2, a promising candidate system with a spin-liquid ground state. Using single-crystal samples, we have measured the magnetic susceptibility both perpendicular and parallel to the kagome plane. A small, temperature-dependent anisotropy is observed, with χ_z/χ_p > 1 at high temperatures and χ_z/χ_p < 1 at low temperatures. Fits of the high-temperature data to a Curie-Weiss model also reveal an anisotropy. By comparison with theoretical calculations, a small easy-axis exchange anisotropy can be deduced as the primary perturbation to the dominant nearest-neighbor Heisenberg interaction. These results have great bearing on relating theoretical calculations based on the kagome Heisenberg antiferromagnet model to experiments on ZnCu3(OH)6Cl2.

    Comment: 4 pages, 4 figures
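
    For readers unfamiliar with the fitting step, below is a minimal sketch of a Curie-Weiss fit, chi(T) = C / (T - theta), of the kind the abstract mentions. The temperature and susceptibility arrays are synthetic placeholders, not the reported measurements; fitting the perpendicular and parallel susceptibilities separately would expose the direction-dependent Curie-Weiss parameters.

```python
# Minimal sketch of a Curie-Weiss fit: chi(T) = C / (T - theta).
# The data below are synthetic placeholders, not the paper's measurements.
import numpy as np
from scipy.optimize import curve_fit

def curie_weiss(T, C, theta):
    """Curie-Weiss law for the magnetic susceptibility."""
    return C / (T - theta)

# Hypothetical high-temperature data (T in kelvin, chi in emu/mol).
T = np.linspace(150.0, 300.0, 30)
chi = curie_weiss(T, 1.0, -300.0) + np.random.normal(0.0, 1e-5, T.size)

# Fitting chi_z and chi_p separately would yield direction-dependent
# (C, theta), i.e. the anisotropy described in the abstract.
(C_fit, theta_fit), _ = curve_fit(curie_weiss, T, chi, p0=(1.0, -100.0))
print(f"C = {C_fit:.3f} emu K/mol, theta = {theta_fit:.1f} K")
```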

    Distinctive-attribute Extraction for Image Captioning

    Image captioning, an open research issue, has evolved with the progress of deep neural networks. Convolutional neural networks (CNNs) and recurrent neural networks (RNNs) are employed to compute image features and generate natural-language descriptions. In previous works, captions carrying semantic descriptions have been generated by feeding additional information into the RNNs. In this paper, we propose distinctive-attribute extraction (DaE), which explicitly encourages significant semantic attributes so that the generated caption accurately describes the overall meaning of the image and its distinctive situation. Specifically, the captions of training images are analyzed with term frequency-inverse document frequency (TF-IDF), and the resulting semantic information is used to train the extraction of distinctive attributes for inferring captions. The proposed scheme is evaluated on challenge data and improves objective performance while describing images in more detail.

    Comment: 14 main pages, 4 supplementary pages
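
    As an illustration of the TF-IDF step, the sketch below scores caption terms with scikit-learn and keeps the top-scoring terms of each caption as candidate distinctive attributes. The captions are invented examples, and treating the top TF-IDF terms as attributes is a reading of the abstract, not the authors' exact pipeline.

```python
# Minimal sketch of TF-IDF analysis over training captions; the top-scoring
# terms per caption serve as candidate "distinctive attributes".
# Captions are invented examples, not the challenge dataset.
from sklearn.feature_extraction.text import TfidfVectorizer

captions = [
    "a man riding a surfboard on a large wave",
    "a group of people sitting around a wooden table",
    "a brown dog catching a frisbee in a park",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(captions)  # shape: (n_captions, n_terms)
terms = vectorizer.get_feature_names_out()

# For each caption, keep the highest-scoring terms as candidate
# distinctive attributes to guide caption inference.
for row, caption in zip(tfidf.toarray(), captions):
    top = sorted(zip(row, terms), reverse=True)[:3]
    print(caption, "->", [term for score, term in top if score > 0])
```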